I have a SharePoint list containing my employer's project tasks. When a user adds a new project to the list, I want to push some of the data the user entered to an external site (which is run by our parent company) to store in a separate project management system.
So my question is:
Is there some way to post data (some or all fields) to a PHP script on an external server?
I don't have any programming experience with SharePoint (or .NET in general), so if this is something that can be done with Workflows I would be very happy.
As far as I'm aware there's no mechanism in SharePoint that would allow you to make a request to an external site when a list is updated.
Depending on how quickly the data added to SharePoint needs to make its way into the project management system, a polling solution may be your best bet.
Just create a PHP script that pulls the list from SharePoint and checks for any changes (pushing them to your project management system if any are found), then set up a cron job to run the script at a given interval (hourly/daily, etc., depending on how quickly you need the changes to take effect).
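For illustration, here's a rough sketch of what that polling script could look like in PHP. It assumes (hypothetically) that the list is exposed as JSON via SharePoint's REST API and that the parent company's script accepts a plain POST; authentication is left out entirely:

    <?php
    // poll_sharepoint.php - sketch of the polling approach.
    // The URLs and field names below are hypothetical; adjust them to
    // however your SharePoint list and PM system are actually exposed.

    $listUrl   = "https://sharepoint.example.com/_api/web/lists/getbytitle('Projects')/items";
    $targetUrl = 'https://parent-company.example.com/receive_project.php';
    $stateFile = __DIR__ . '/last_seen_id.txt';

    // Fetch the list as JSON (authentication omitted).
    $ch = curl_init($listUrl);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => ['Accept: application/json;odata=verbose'],
    ]);
    $items = json_decode(curl_exec($ch), true)['d']['results'] ?? [];
    curl_close($ch);

    // Push any item newer than the last one we saw.
    $lastSeen = (int) @file_get_contents($stateFile);
    foreach ($items as $item) {
        if ($item['Id'] <= $lastSeen) {
            continue;
        }
        $post = curl_init($targetUrl);
        curl_setopt_array($post, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => http_build_query([
                'id'    => $item['Id'],
                'title' => $item['Title'],
            ]),
        ]);
        curl_exec($post);
        curl_close($post);
        $lastSeen = max($lastSeen, $item['Id']);
    }
    file_put_contents($stateFile, $lastSeen);

A crontab entry like "0 * * * * php /path/to/poll_sharepoint.php" would then run it hourly.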
Anyways, hope that helped somewhat :)
Look into list event handlers; you can attach event handlers to list item updates/adds, etc.
http://blogs.msdn.com/b/brianwilson/archive/2007/03/05/part-1-event-handlers-everything-you-need-to-know-about-microsoft-office-sharepoint-portal-server-moss-event-handlers.aspx
You will need a way to communicate with the PHP system (a web service, or a database insert/update via C#/VB.NET code).
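The PHP side of that can stay very simple: whatever fires on the SharePoint end just POSTs to a small receiving script. A sketch (the table and field names are made up):

    <?php
    // receive_project.php - hypothetical endpoint the event handler POSTs to.
    $pdo = new PDO('mysql:host=localhost;dbname=pm_system', 'user', 'secret');

    $stmt = $pdo->prepare(
        'INSERT INTO projects (sharepoint_id, title) VALUES (:id, :title)'
    );
    $stmt->execute([
        ':id'    => $_POST['id']    ?? null,
        ':title' => $_POST['title'] ?? null,
    ]);

    echo 'OK';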
I'm developing a web app using Laravel (a PHP framework). The app is going to be used by about 30 of my co-workers on their Windows laptops.
My co-workers interview people on a regular basis. They will use the web app to add a new profile to a database once they interview somebody for the first time and they will append notes to these profiles on subsequent visits. Profiles and notes are stored using MySQL, but since I'm using Laravel, I could easily switch to another database.
Sometimes, my co-workers have to interview people when they're offline. They might visit a group of interviewees, add a few profiles and add some notes to existing ones during a session without any internet access.
How should I approach this?
1. With a local web server on every laptop. I've seen applications ship with some kind of installer including a LAMP stack, but I can't find any documentation on this.
2. I could install the app and something like XAMPP on every laptop myself. That would be possible, but in the future more people might use the app, and not all of them might be located nearby.
3. I could use Service Workers, maybe in connection with a library such as UpUp. This seems to be the most elegant approach.
I'd like to give option (3) a try, but my app is database-driven, and I'm not sure whether I could realize this approach:
Would it be possible to write all the (relevant) data from the DB to - let's say - a JSON file which could be accessed instead of the DB when in offline mode? We don't have to handle much data (less than 100 small data records should be available during an interview session).
When my co-workers add profiles or notes in offline mode, is there any "web service" way to insert the data that has been entered into the DB?
Thanks
Pida
I would think of it as building the app in "two parts".
First, the front end uses Ajax calls to the back end (which is nothing but a REST API). If there isn't any network connection, store the data in the browser using local storage.
When the user later has a network connection, you can send the data from local storage to the back end and then clear the local storage.
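On the back end, that sync endpoint can be an ordinary Laravel route that accepts the batched records from local storage. This is only a sketch; the "/api/sync" URL, the Note model, and the field names are all assumptions:

    <?php
    // routes/api.php - hypothetical sync endpoint for queued offline data.
    use Illuminate\Http\Request;

    Route::post('/api/sync', function (Request $request) {
        // Expect a JSON array of notes queued while offline, e.g.
        // [{"profile_id": 1, "body": "..."}, ...]
        foreach ($request->input('notes', []) as $note) {
            \App\Note::create([
                'profile_id' => $note['profile_id'],
                'body'       => $note['body'],
            ]);
        }
        return response()->json(['status' => 'ok']);
    });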
If you add web servers on the laptops, the databases and info would only be stored on each local laptop and would not be synced.
You can build what you describe using service workers to cache your site's static content to make it available offline, and a specific fetch handler in the service worker to detect a failed PUT or POST and queue the data in IndexedDB. You'd then periodically check IndexedDB for any queued data when your web app is loaded, and attempt to resend it.
I've described this approach in more detail at https://developers.google.com/web/showcase/case-study/service-workers-iowa#updates-to-users-schedules
That article assumes the use of the sw-precache library for caching your site's static assets, and the sw-toolbox library to provide runtime fetch handlers that check for failed business-logic requests. It also uses a promise-based IndexedDB wrapper called simpleDB, although I'd probably go with the more recent idb library nowadays.
I would like to create a simple notification icon that displays a number in the user's system tray.
The application only needs to allow the input of an API key that it would use to fetch information from the server. So, for example:
http://www.example.com/api.php?key=dfg45tgyy67h
The PHP file will return two values, a number, and a URL. The number should appear in the system tray, and clicking on it should take you to the URL. The application should update the information at a specified interval, which can be hard-coded into the application.
I really have no idea how to do this, but can pick up things like this pretty quickly. So I would like to know some ways to accomplish this, or what the easiest method to use would be.
EDIT: When I said PHP, what I meant was that on the server it would be a PHP file serving up the information to the application. I didn't plan on creating the client application in PHP.
You can't do this with PHP, as it has no way of interacting with a user's computer, and that includes the system tray. You'll need to write something that runs on their computer and polls your PHP script for this information.
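The server side of that is the easy part. Here's a sketch of what the PHP endpoint might return; the JSON shape and the key check are just assumptions for illustration:

    <?php
    // api.php - hypothetical server side: validates the key and returns
    // the two values the tray application polls for.
    header('Content-Type: application/json');

    $key = $_GET['key'] ?? '';

    // Stand-in for a real lookup; validate the key against your DB.
    if ($key !== 'dfg45tgyy67h') {
        http_response_code(403);
        exit(json_encode(['error' => 'invalid key']));
    }

    echo json_encode([
        'number' => 42,                             // count shown in the tray
        'url'    => 'http://www.example.com/inbox', // where a click should go
    ]);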
Use a cross-platform language and widget toolkit with both browser launching and system tray capabilities.
I am developing a network monitor that monitors several components using SNMP. I save all the received data in a round-robin database.
I have started to create a web-based configuration center that allows users to add devices to be monitored and to access the graphs (generated with rrdtool) for all devices.
I must run daily, weekly, monthly, and yearly updates of the database.
My question is: how can I launch a script that executes an SNMP command to fetch the data from the device, stores it in the database, and runs in the background? By background, I mean a process that does not depend on whether the user is logged in to the web configuration page.
I have never done anything in PHP before, which is why I am asking.
I hope you can help me out. Thank you in advance.
Best regards.
I developed such a system a few years ago. We used Cacti, in combination with Nagios and Smokeping. Of course, if your needs are simpler, you could use cron scripts to fetch your data. But Cacti is definitely worth a look (as well as Nagios, but unlike Cacti, it's not specifically targeted at RRD files).
Note that none of these systems require PHP. They run standalone, as daemons. It's then pretty straightforward to write a web interface on top of that.
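If you do go the plain-cron route, the fetch script itself doesn't need to be fancy. Here's a sketch using the snmpget and rrdtool command-line tools; the host, community string, OID, and paths are all placeholders:

    <?php
    // fetch_snmp.php - run from cron, e.g. */5 * * * * php /path/to/fetch_snmp.php
    // Relies on the net-snmp and rrdtool CLI tools being installed.

    $host      = '192.168.1.10';
    $community = 'public';
    $oid       = '.1.3.6.1.2.1.2.2.1.10.1'; // ifInOctets for interface 1
    $rrdFile   = '/var/lib/rrd/device1.rrd';

    // -Oqv prints just the value, nothing else.
    $value = trim((string) shell_exec(sprintf(
        'snmpget -v2c -c %s -Oqv %s %s',
        escapeshellarg($community),
        escapeshellarg($host),
        escapeshellarg($oid)
    )));

    if ($value !== '') {
        // N = "now"; rrdtool timestamps the sample itself.
        shell_exec(sprintf(
            'rrdtool update %s N:%s',
            escapeshellarg($rrdFile),
            escapeshellarg($value)
        ));
    }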
How can New Relic tap into my app with a simple install? How does it know all the methods, requests, etc?
It works for RoR, PHP, etc.
Can anyone explain the technology behind it? I'm interested in tapping into my Rails app, but I want to do so smoothly like New Relic.
Thanks
First up, you will not manage to duplicate the functionality of New Relic on your own. Ignoring the server side, the rpm gem is a pretty complex piece of software that does a lot of stuff. Have a look at the source if you want to see how it hooks into the Rails system. The source is worth a read, as it does some cool stuff in terms of threading and marshaling the data before sending it back to their servers.
If you want a replacement because New Relic is expensive (and rightly so, it's awesome at what it does), then have a look at the FreeRelic project on GitHub.
They use aspect-oriented programming concepts and reflection heavily to intercept the original method call and add instrumentation around it.
In general terms, New Relic's gem inserts a kind of middleware into your web framework and collects data from your endpoint (think of a Rails route) until its response. After every "harvest" interval (which defaults to 60 seconds), it sends a POST request to New Relic's services with this data.
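You can get a feel for the pattern with a toy version in PHP: time each request, spool the measurement locally, and let a separate cron "harvester" ship the spool upstream every minute. To be clear, this only illustrates the measure-then-harvest idea; it is not how New Relic's agent is actually built:

    <?php
    // instrument.php - toy version of the "measure, then harvest" pattern.
    // Include this at the top of your front controller.

    $start = microtime(true);

    register_shutdown_function(function () use ($start) {
        $metric = json_encode([
            'path'        => $_SERVER['REQUEST_URI'] ?? 'cli',
            'duration_ms' => round((microtime(true) - $start) * 1000, 2),
            'ts'          => time(),
        ]);
        // Spool locally; a cron "harvester" would POST and truncate this
        // file every 60 seconds, mimicking the agent's harvest cycle.
        file_put_contents('/tmp/metrics.spool', $metric . "\n", FILE_APPEND | LOCK_EX);
    });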
You can also tailor the data you need with custom metrics and custom events.
It's also possible to run queries with NRQL and build graphs from them (like you would in Grafana).
They have a customized service for WordPress too, but it's a bit messy at the start.
One option if you want to save some money is to configure CloudWatch + Datadog, but I would give their service a shot if uptime is crucial for your app.
For a Rails solution, you could simply implement a more verbose logging level (development/debug level) and interrogate the production.log file for specific events, timings, etc.
For Java, they attach a Java agent to the JVM, which intercepts method calls and monitors them. You can use AspectJ to replicate the same behaviour and log every method call wherever you want, say, to custom CloudWatch metrics.
In the case of Java, it's bytecode instrumentation. They "hack" the key methods of your application server and add their code to them. Then they send the relevant transaction info to their server and aggregate it, and you can see the summary. It's a really complicated process, so I don't think one dev could implement it.
If you're already familiar with New Relic's application monitoring, then you probably know about New Relic's agents that run in-process on web apps, collecting and reporting all sorts of details about what's happening in the app. RUM leverages the agents to dynamically inject JavaScript into pages as they are built. The injected JavaScript collects timing information in the browser and contains details that identify the specific app and web transaction processed on the backend, as well as how time was spent in the app for each request. When a page completes loading in an end user's browser, the information is sent back to New Relic asynchronously, so it doesn't affect page load time.
You can turn RUM on/off via your application settings in New Relic. As well, you can turn RUM on/off via the agent's configuration file (newrelic.yml; a 'browser_monitoring auto_instrument' flag has been introduced).
The agents have been enhanced to automatically inject JavaScript into the HTML pages, so using RUM is as simple as checking the checkbox on the New Relic control panel. However, if you'd prefer more control, you can use New Relic's Agent API to generate the JavaScript and thus control exactly when and where the header and footer scripts are included.
I'm building a web application, and I need an architecture that allows me to run it across two servers. The application scrapes information from other sites, both periodically and on input from the end user. To do this I'm using PHP + cURL to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB.
Then I will use Python to run some algorithms on the data; this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if a result is specific to the user, skip storing it and just serve it to the user.
I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB, and Python on another server.
What framework(s) should I use for this kind of job? Are MVC and CakePHP a good solution? If so, will I be able to control and monitor the Python code using them?
Thanks
How do I go about implementing this?
Too big a question for a full answer here. Certainly you don't want two sets of scraping code (one for scheduled runs, one for on-demand). Beyond the added complication, you really don't want to run a job that takes an indefinite time to complete within the thread generated by a request to your web server; user requests for a scrape should be run via the scheduling mechanism and reported back to users (although if necessary you could use Ajax polling to give the illusion that it's happening in the same thread).
What framework(s) should I use?
Frameworks are not magic bullets. And you shouldn't be choosing a framework based primarily on the nature of the application you are writing. Certainly if specific, critical functionality is precluded by a specific framework, then you are using the wrong framework - but in my experience that has never been the case - you just need to write some code yourself.
using something more complex than a cron job
Yes, a cron job is probably not the right way to go, for lots of reasons. If it were me, I'd look at writing a daemon which would schedule scrapes (and accept connections from web page scripts to enqueue additional scrapes), but I'd run the scrapes themselves as separate processes.
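A bare-bones version of such a daemon in PHP might look like the following; the "jobs" table and the separate scrape.php script are assumptions. The key point is that each scrape runs as its own detached process, so a slow site can't block the scheduler:

    <?php
    // scrape_daemon.php - minimal scheduling loop, started once at boot.
    // Assumes a `jobs` table with (id, url, status) and a separate scrape.php.

    $pdo = new PDO('mysql:host=localhost;dbname=scraper', 'user', 'secret');

    while (true) {
        $jobs = $pdo->query(
            "SELECT id FROM jobs WHERE status = 'pending'"
        )->fetchAll(PDO::FETCH_COLUMN);

        foreach ($jobs as $id) {
            $pdo->exec("UPDATE jobs SET status = 'running' WHERE id = " . (int) $id);
            // Launch the scrape detached so this loop keeps going.
            shell_exec(sprintf(
                'php /path/to/scrape.php %d > /dev/null 2>&1 &',
                $id
            ));
        }
        sleep(10); // poll the queue every 10 seconds
    }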
Is MVC a good architecture for this? (I'm new to MVC, architectures etc.)
No. Don't start by thinking about whether a pattern fits the application. Patterns are a useful tool for teaching, but they describe what code is, not what it will be.
(Your application might include some MVC patterns, but it should also include lots of other ones.)
C.
I think you already have a clear idea of how to organize your layers.
First of all, you would need a web framework for your front end.
You have many choices here; CakePHP, AFAIK, is a good choice, and it is designed to force you to follow the MVC design pattern.
Then you would need to design your database to store what users want to be spidered.
Your DB will be accessed by your web application to store users' requests, by your PHP script to know what to scrape, and finally by your Python batch to confirm to the users that the requested data is available.
A possible over-simplified scenario:
1. User registers on your site.
2. User asks to grab a random page from Wikipedia.
3. The request is stored through the CakePHP application in the DB.
4. A cron PHP batch starts and checks the DB for new requests (a sketch of steps 4-6 follows below).
5. The batch finds the new request and scrapes the page from Wikipedia.
6. The batch updates the DB row with a "scraped" flag.
7. A cron Python batch starts and checks the DB for new "scraped" flags.
8. The batch finds the new "scraped" flag and parses the Wikipedia page to extract some tags.
9. The batch updates the DB row with a "done" flag.
10. The user finds the requested information on their profile.
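A sketch of the PHP batch from steps 4-6; the table and column names are invented for the example:

    <?php
    // scrape_batch.php - run from cron (step 4 in the scenario above).

    $pdo = new PDO('mysql:host=localhost;dbname=spider', 'user', 'secret');

    // Steps 4-5: find new requests and scrape them.
    $requests = $pdo->query(
        'SELECT id, url FROM requests WHERE scraped = 0'
    )->fetchAll(PDO::FETCH_ASSOC);

    foreach ($requests as $request) {
        $ch = curl_init($request['url']);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $html = curl_exec($ch);
        curl_close($ch);

        // Step 6: store the raw page and set the scraped flag so the
        // Python batch can pick it up and do the parsing.
        $stmt = $pdo->prepare(
            'UPDATE requests SET raw_html = :html, scraped = 1 WHERE id = :id'
        );
        $stmt->execute([':html' => $html, ':id' => $request['id']]);
    }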